    Heuristic Ternary Error-Correcting Output Codes Via Weight Optimization and Layered Clustering-Based Approach

    One important classifier ensemble for multiclass classification problems is Error-Correcting Output Codes (ECOC). It bridges multiclass problems and binary-class classifiers by decomposing a multiclass problem into a series of binary-class problems. In this paper, we present a heuristic ternary code, named Weight Optimization and Layered Clustering-based ECOC (WOLC-ECOC). It starts from an arbitrary valid ECOC and iterates the following two steps until the training risk converges. The first step, named Layered Clustering-based ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing binary-class problem. The second step adds the new classifiers to the ECOC by a novel Optimized Weighted (OW) decoding algorithm, where the optimization problem of the decoding is solved by the cutting plane algorithm. Technically, LC-ECOC prevents the heuristic training process from being blocked by difficult binary-class problems, and OW decoding guarantees that the training risk does not increase, which keeps the code length small. Results on 14 UCI datasets and a music genre classification problem demonstrate the effectiveness of WOLC-ECOC.
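
    The weighted decoding step is the easiest part to make concrete. Below is a minimal numpy sketch of weighted ternary ECOC decoding under an exponential-loss rule; the code matrix, margins, and weights are synthetic, the loss is a common choice rather than the exact objective of OW decoding, and the cutting-plane weight optimization itself is omitted.

        import numpy as np

        def weighted_ecoc_decode(M, F, w):
            """Decode a ternary ECOC with per-classifier weights.

            M: (n_classes, n_cols) code matrix with entries in {-1, 0, +1};
               0 means the column's classifier ignores that class.
            F: (n_samples, n_cols) real-valued classifier margins.
            w: (n_cols,) nonnegative per-classifier weights.
            """
            # exponential-loss decoding; inactive (zero) code entries contribute 0
            E = np.exp(-F[:, None, :] * M[None, :, :])          # (n, C, L)
            D = ((w * np.abs(M))[None, :, :] * E).sum(axis=-1)  # (n, C)
            return D.argmin(axis=1)

        # toy one-vs-one ternary code for 3 classes
        M = np.array([[+1, +1,  0],
                      [-1,  0, +1],
                      [ 0, -1, -1]])
        F = np.array([[2.0, 1.5, 0.3]])   # margins favoring class 0
        print(weighted_ecoc_decode(M, F, np.ones(3)))   # -> [0]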

    Modular invariants and singularity indices of hyperelliptic fibrations

    The modular invariants of a family of semistable curves are the degrees of the corresponding divisors on the image of the moduli map. The singularity indices were introduced by G. Xiao to classify singular fibers of hyperelliptic fibrations and to compute global invariants locally. In the semistable case, we show that the modular invariants corresponding to the boundary classes are exactly the singularity indices. As an application, we show that Xiao's formula for relative Chern numbers agrees with that of Cornalba-Harris in the semistable case. (Comment: to appear in Chin. Ann. Math. B.)

    Learning the kernel matrix by resampling

    In this abstract paper, we introduce a new kernel learning method based on a nonparametric density estimator. The estimator consists of a group of k-centroids clusterings. Each clustering randomly selects data points with randomly selected features as its centroids, and learns a one-hot encoder by one-nearest-neighbor optimization. The estimator generates a sparse representation for each data point, from which we construct a nonlinear kernel matrix. One major advantage of the proposed kernel method is that it is relatively insensitive to its free parameters, and therefore can produce reasonable results without parameter tuning. Another advantage is its simplicity. We conjecture that the proposed method can find applications in many learning tasks or methods where a sparse representation or kernel matrix is explored. In this preliminary study, we have applied the kernel matrix to spectral clustering. Our experimental results demonstrate that the kernel generated by the proposed method outperforms the well-tuned Gaussian RBF kernel. This abstract paper is intended to protect the idea; full versions will be updated later.
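
    A minimal numpy sketch of the construction as I read it from the abstract: each base clustering draws random data points restricted to a random feature subset as centroids, every point is one-hot coded by its nearest centroid, the codes are concatenated into a sparse representation Z, and the kernel is the inner product of those codes. All parameter names and default values are illustrative.

        import numpy as np

        def resampling_kernel(X, n_clusterings=100, k=10, feat_frac=0.5, rng=None):
            """Kernel matrix from an ensemble of random k-centroids clusterings."""
            rng = np.random.default_rng(rng)
            n, d = X.shape
            codes = []
            for _ in range(n_clusterings):
                feats = rng.choice(d, size=max(1, int(feat_frac * d)), replace=False)
                centroids = X[rng.choice(n, size=k, replace=False)][:, feats]
                # one-hot code: index of the nearest centroid in the feature subspace
                dist = ((X[:, feats][:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
                codes.append(np.eye(k)[dist.argmin(axis=1)])
            Z = np.hstack(codes)               # sparse representation of the data
            return Z @ Z.T / n_clusterings     # fraction of shared nearest centroids

    The resulting K measures, for each pair of points, the fraction of clusterings in which they share a nearest centroid; it is symmetric and positive semidefinite, so it can be fed to spectral clustering as a precomputed affinity.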

    Multilayer bootstrap network for unsupervised speaker recognition

    We apply the multilayer bootstrap network (MBN), a recently proposed unsupervised learning method, to unsupervised speaker recognition. The proposed method first extracts supervectors from an unsupervised universal background model, then reduces the dimension of the high-dimensional supervectors with the multilayer bootstrap network, and finally conducts unsupervised speaker recognition by clustering the low-dimensional data. Comparison results with two unsupervised and one supervised speaker recognition techniques demonstrate the effectiveness and robustness of the proposed method.
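
    The pipeline can be pictured with off-the-shelf stand-ins, as in the sketch below: a scikit-learn Gaussian mixture serves as the universal background model, a posterior-weighted stack of component means serves as a simplified supervector, PCA stands in for the MBN dimensionality reduction (MBN itself is summarized in the 'Multilayer bootstrap networks' entry below), and agglomerative clustering does the final grouping. This is a structural sketch only, not the paper's exact recipe.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        from sklearn.decomposition import PCA
        from sklearn.cluster import AgglomerativeClustering

        def cluster_speakers(utterance_feats, n_speakers, n_mix=8):
            """utterance_feats: list of (frames_i, dim) acoustic feature arrays."""
            ubm = GaussianMixture(n_components=n_mix).fit(np.vstack(utterance_feats))
            supervecs = []
            for F in utterance_feats:
                post = ubm.predict_proba(F)                 # (frames, n_mix)
                # posterior-weighted component means, stacked: a crude supervector
                means = (post.T @ F) / (post.sum(axis=0)[:, None] + 1e-8)
                supervecs.append(means.ravel())
            S = np.asarray(supervecs)
            low = PCA(n_components=min(10, S.shape[1], len(S) - 1)).fit_transform(S)
            return AgglomerativeClustering(n_clusters=n_speakers).fit_predict(low)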

    Linear Regression for Speaker Verification

    This paper presents a linear regression based back-end for speaker verification. Linear regression is a simple linear model that minimizes the mean squared estimation error between the target and its estimate with a closed-form solution, where the target is defined as the ground-truth indicator vectors of utterances. We use the linear regression model to learn speaker models from a front-end, and verify the similarity of two speaker models with a cosine similarity scoring classifier. To evaluate the effectiveness of the linear regression model, we construct three speaker verification systems that use the Gaussian mixture model and identity-vector (GMM/i-vector) front-end, the deep neural network and i-vector (DNN/i-vector) front-end, and the deep vector (d-vector) front-end, respectively. Our empirical comparison results on the NIST speaker recognition evaluation data sets show that the proposed method outperforms within-class covariance normalization, linear discriminant analysis, and probabilistic linear discriminant analysis, given any of the three front-ends.
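
    A minimal sketch of such a back-end: ridge-regularized least squares with one-hot speaker indicator targets, solved in closed form, followed by cosine scoring. The small ridge term and the choice of the rows of W as speaker models are assumptions made to keep the example self-contained.

        import numpy as np

        def train_lr_backend(X, labels, n_spk, lam=1e-3):
            """X: (n_utts, dim) front-end embeddings; labels: speaker indices.
            Returns W of shape (n_spk, dim); row s is the model of speaker s."""
            Y = np.eye(n_spk)[labels]                  # one-hot indicator targets
            # closed-form ridge solution of min ||X W^T - Y||^2 + lam ||W||^2
            W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y).T
            return W

        def cosine_score(model, test_emb):
            """Cosine similarity between a speaker model and a test embedding."""
            return float(model @ test_emb /
                         (np.linalg.norm(model) * np.linalg.norm(test_emb) + 1e-12))

    Scoring a verification trial then reduces to cosine_score(W[s], x_test) against the claimed speaker s.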

    Learning Deep Representations By Distributed Random Samplings

    In this paper, we propose an extremely simple deep model for unsupervised nonlinear dimensionality reduction, deep distributed random samplings, which behaves like a stack of unsupervised bootstrap aggregating. First, its network structure is novel: each layer of the network is a group of mutually independent k-centers clusterings. Second, its learning method is extremely simple: the k centers of each clustering are just k randomly selected examples from the training data; for small-scale data sets, the k centers are further randomly reconstructed by a simple cyclic-shift operation. Experimental results on nonlinear dimensionality reduction show that the proposed method can learn abstract representations on both large-scale and small-scale problems, and meanwhile is much faster than deep neural networks on large-scale problems.
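
    A sketch of one layer under my reading of the abstract: the k centers of each clustering are k randomly drawn training examples, optionally "reconstructed" by cyclic shifts of their feature vectors for small data sets, and the layer output is the concatenation of the one-hot nearest-center codes. Parameter values are illustrative.

        import numpy as np

        def ddrs_layer(X, n_clusterings=20, k=5, cyclic_shift=False, rng=None):
            """One layer: a group of independent random k-centers clusterings."""
            rng = np.random.default_rng(rng)
            n, d = X.shape
            codes = []
            for _ in range(n_clusterings):
                # the k centers are simply k randomly selected training examples
                centers = X[rng.choice(n, size=k, replace=False)].copy()
                if cyclic_shift:   # center "reconstruction" for small data sets
                    for i in range(k):
                        centers[i] = np.roll(centers[i], rng.integers(d))
                dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                codes.append(np.eye(k)[dist.argmin(axis=1)])
            return np.hstack(codes)    # concatenated one-hot codes, fed upward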

    Multilayer bootstrap networks

    The multilayer bootstrap network builds a gradually narrowed multilayer nonlinear network from bottom up for unsupervised nonlinear dimensionality reduction. Each layer of the network is a nonparametric density estimator consisting of a group of k-centroids clusterings. Each clustering randomly selects data points with randomly selected features as its centroids, and learns a one-hot encoder by one-nearest-neighbor optimization. Geometrically, the nonparametric density estimator at each layer projects the input data space to a uniformly distributed discrete feature space, where the similarity of two data points is measured by the number of nearest centroids they share in common. The multilayer network gradually reduces the nonlinear variations of data from bottom up by implicitly building a vast number of hierarchical trees on the original data space. Theoretically, the estimation error caused by the nonparametric density estimator is proportional to the correlation between the clusterings, both of which are reduced by the randomization steps. (Comment: accepted for publication in Neural Networks.)
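
    A compact sketch of the whole network under the description above: every layer is a group of random k-centroids clusterings with random feature subsets, each layer emits concatenated one-hot codes, and k shrinks from bottom up so the network gradually narrows. Layer sizes and counts are illustrative, not the paper's settings.

        import numpy as np

        def mbn_transform(X, ks=(64, 32, 16), n_clusterings=50, feat_frac=0.5, rng=None):
            """Gradually narrowed stack of random k-centroids clustering layers."""
            rng = np.random.default_rng(rng)
            H = X
            for k in ks:                             # k shrinks from bottom up
                n, d = H.shape
                codes = []
                for _ in range(n_clusterings):
                    f = rng.choice(d, size=max(1, int(feat_frac * d)), replace=False)
                    c = H[rng.choice(n, size=min(k, n), replace=False)][:, f]
                    dist = ((H[:, f][:, None, :] - c[None, :, :]) ** 2).sum(-1)
                    codes.append(np.eye(c.shape[0])[dist.argmin(axis=1)])
                H = np.hstack(codes)   # one-hot codes become the next layer's input
            return H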

    An Investigation of Universal Background Sparse Coding Based Speaker Verification on TIMIT

    In this paper, we propose a universal background model, named universal background sparse coding (UBSC), for speaker verification. The proposed method trains an ensemble of clusterings by data resampling, and produces sparse codes from the clusterings by one-nearest-neighbor optimization plus binarization. The main advantage of UBSC is that it does not suffer from local minima and does not make Gaussian assumptions on the data distribution. We evaluated UBSC on a clean speech corpus, TIMIT, using the cosine similarity and the inner product similarity as the scoring methods of a trial. Experimental results show that UBSC is comparable to the Gaussian mixture model.
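
    The trial-scoring side is easy to sketch: per-frame sparse codes are pooled and binarized into an utterance-level code, and a trial is scored by either the inner product or the cosine similarity of the two codes. The mean-then-threshold pooling is an assumption; the encoder itself follows the same one-hot clustering construction as the entries above.

        import numpy as np

        def binarize(frame_codes):
            """Pool per-frame sparse codes of an utterance into a binary code."""
            return (frame_codes.mean(axis=0) > 0).astype(float)

        def score_trial(c_enroll, c_test, metric="cosine"):
            """Score a verification trial from two binarized sparse codes."""
            if metric == "inner":
                return float(c_enroll @ c_test)
            denom = np.linalg.norm(c_enroll) * np.linalg.norm(c_test) + 1e-12
            return float(c_enroll @ c_test / denom)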

    Deep Ad-hoc Beamforming

    Far-field speech processing is an important and challenging problem. In this paper, we propose deep ad-hoc beamforming, a deep-learning-based multichannel speech enhancement framework built on ad-hoc microphone arrays, to address the problem. It contains three novel components. First, it combines ad-hoc microphone arrays with deep-learning-based multichannel speech enhancement, which significantly reduces the probability that far-field acoustic conditions occur. Second, it groups the microphones around the speech source into a local microphone array by a supervised channel selection framework based on deep neural networks. Third, it develops a simple time synchronization framework to synchronize channels that have different time delays. Besides the above novelties and advantages, the proposed model is trained in a single-channel fashion, so that it can easily benefit from new developments in speech processing techniques. Its test stage is also flexible enough to incorporate any number of microphones without retraining or modifying the framework. We have developed many implementations of the proposed framework and conducted extensive experiments in scenarios where the locations of the speech sources are far-field, random, and blind to the microphones. Results on speech enhancement tasks show that our method outperforms its counterpart that works with linear microphone arrays by a considerable margin, in both diffuse-noise reverberant environments and point-source-noise reverberant environments.
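
    Two components lend themselves to a compact sketch: supervised channel selection by a per-channel quality score (in the paper the score comes from a deep neural network; here a hypothetical predict_quality callable stands in) and time synchronization by cross-correlation-based delay estimation. The circular shift used for alignment is a simplification.

        import numpy as np

        def select_and_align(channels, predict_quality, n_select=4):
            """channels: list of 1-D signals from an ad-hoc microphone array.
            predict_quality: callable scoring one channel (stand-in for the DNN)."""
            scores = np.array([predict_quality(c) for c in channels])
            picked = np.argsort(scores)[::-1][:n_select]   # best channels first
            ref = channels[picked[0]]
            aligned = []
            for i in picked:
                # lag of channel i relative to the reference channel
                xc = np.correlate(channels[i], ref, mode="full")
                delay = int(xc.argmax()) - (len(ref) - 1)
                aligned.append(np.roll(channels[i], -delay))   # circular shift
            return aligned, picked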

    Cosmological model-independent test of ΛCDM with two-point diagnostic by the observational Hubble parameter data

    Aiming at exploring the nature of dark energy (DE), we use forty-three observational Hubble parameter data (OHD) points in the redshift range 0 < z ≤ 2.36 to make a cosmological model-independent test of the ΛCDM model with the two-point Omh^2(z_2; z_1) diagnostic. In the ΛCDM model, with equation of state (EoS) w = -1, the two-point diagnostic relation Omh^2 ≡ Ω_m h^2 holds, where Ω_m is the present matter density parameter and h is the Hubble parameter divided by 100 km s^-1 Mpc^-1. We utilize two methods, the weighted mean and median statistics, to bin the OHD and increase the signal-to-noise ratio of the measurements. The binning methods turn out to be promising and robust. By applying the two-point diagnostic to the binned data, we find that although the best-fit values of Omh^2 fluctuate as the continuous redshift intervals change, on average they are consistent with being constant within the 1σ confidence interval. Therefore, we conclude that the ΛCDM model cannot be ruled out. (Comment: 14 pages, 7 figures. arXiv admin note: text overlap with arXiv:1507.0251.)
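
    The diagnostic is easy to state concretely: in flat ΛCDM, H^2(z) = H_0^2 [Ω_m (1+z)^3 + 1 - Ω_m], so Omh^2(z_2; z_1) = (h^2(z_2) - h^2(z_1)) / ((1+z_2)^3 - (1+z_1)^3) equals Ω_m h^2 for every redshift pair, and significant pair-to-pair variation would signal a departure from ΛCDM. A minimal sketch with a synthetic expansion history:

        import numpy as np

        def omh2(z1, h1, z2, h2):
            """Two-point diagnostic Omh^2(z2; z1), with h = H(z) / (100 km/s/Mpc).
            Constant and equal to Omega_m h0^2 in flat LambdaCDM with w = -1."""
            return (h2**2 - h1**2) / ((1 + z2)**3 - (1 + z1)**3)

        # sanity check on a synthetic flat LambdaCDM expansion history
        Om, h0 = 0.3, 0.7
        z = np.array([0.2, 0.8, 1.5, 2.3])
        h = h0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)
        print(omh2(z[0], h[0], z[3], h[3]))   # ~= Om * h0**2 = 0.147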